2 research outputs found

    Dynamic Speed and Separation Monitoring with On-Robot Ranging Sensor Arrays for Human and Industrial Robot Collaboration

    This research presents a flexible and dynamic implementation of the Speed and Separation Monitoring (SSM) safety measure that optimizes the productivity of a task while ensuring human safety during Human-Robot Collaboration (HRC). Unlike the standard static, fixed 2D safety zones demarcated by 2D scanning LiDARs, this research presents a dynamic sensor setup that changes the safety zones based on the robot pose and motion. The focus of this research is the implementation of a dynamic SSM safety configuration using Time-of-Flight (ToF) laser-ranging sensor arrays placed around the centers of the links of a robot arm, and it investigates the viability of on-robot exteroceptive sensors for implementing SSM as a safety measure. The implementation of varying dynamic SSM safety configurations is shown, based on approaches to measuring human-robot separation distance and relative speed with three sensor modalities: ToF sensor arrays, a motion-capture system, and a 2D LiDAR. This study presents a comparative analysis of the dynamic SSM safety configurations in terms of safety, performance, and productivity. A system-of-systems (cyber-physical system) architecture for conducting and analyzing the HRC experiments was proposed and implemented. The robots, objects, and human operators sharing the workspace are represented virtually as part of the system using a digital-twin setup. This system was capable of controlling the robot motion, monitoring human physiological response, and tracking the progress of the collaborative task. Experiments were conducted with human subjects performing a task while sharing the robot workspace under the proposed dynamic SSM safety configurations. The results showed a preference for the ToF sensors and motion capture over the 2D LiDAR currently used in industry. The human subjects felt safe and comfortable using the proposed dynamic SSM safety configuration with ToF sensor arrays. For a standard pick-and-place task, the results showed up to a 40% increase in productivity in comparison to a 2D LiDAR-based setup.
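
    For context, SSM implementations are commonly built around the protective separation distance of ISO/TS 15066, checked once per control cycle against the measured human-robot separation (here, e.g., from the on-robot ToF arrays). The sketch below shows what such a check could look like; the formula follows the standard, but the function names, thresholds, and parameter values are illustrative assumptions and are not taken from this work.

```python
# Minimal SSM sketch, assuming the ISO/TS 15066 protective separation distance
# S_p = S_h + S_r + S_s + C + Z_d + Z_r. All values below are illustrative.

def protective_separation_distance(
    v_human: float,      # operator speed toward the robot [m/s]
    v_robot: float,      # robot speed toward the operator [m/s]
    t_reaction: float,   # sensor + controller reaction time [s]
    t_stop: float,       # robot stopping time [s]
    s_stop: float,       # robot stopping distance [m]
    c_intrusion: float,  # intrusion distance from sensor detection capability [m]
    z_human: float,      # operator position measurement uncertainty [m]
    z_robot: float,      # robot position uncertainty [m]
) -> float:
    s_h = v_human * (t_reaction + t_stop)   # operator travel while the robot reacts and stops
    s_r = v_robot * t_reaction              # robot travel during the reaction time
    return s_h + s_r + s_stop + c_intrusion + z_human + z_robot


def ssm_action(measured_separation: float, s_p: float, slow_margin: float = 0.3) -> str:
    """Map the measured human-robot separation to a robot action (hypothetical policy)."""
    if measured_separation <= s_p:
        return "protective_stop"
    if measured_separation <= s_p + slow_margin:
        return "reduce_speed"
    return "full_speed"


# Example: one separation reading per control cycle (made-up numbers).
s_p = protective_separation_distance(
    v_human=1.6, v_robot=0.5, t_reaction=0.1, t_stop=0.3,
    s_stop=0.15, c_intrusion=0.1, z_human=0.05, z_robot=0.02,
)
print(ssm_action(measured_separation=0.9, s_p=s_p))  # -> "protective_stop"
```

    A dynamic configuration of the kind described above would re-evaluate this check with pose- and motion-dependent inputs (robot speed, stopping distance, sensor uncertainty) rather than with a single fixed zone.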

    A Brain Computer Interface for Interactive and Intelligent Image Search and Retrieval

    This research proposes a Brain-Computer Interface (BCI) as an interactive and intelligent image search and retrieval tool that allows users, disabled or otherwise, to browse and search for images using brain signals. The proposed BCI system decodes the brain state from non-invasive electroencephalography (EEG) signals, in combination with machine learning, artificial intelligence, and automatic content and similarity analysis of images. The user can spell search queries using a mental typewriter (Hex-O-Speller), and the resulting images from the web search are shown to the user as a Rapid Serial Visual Presentation (RSVP). For each image shown, the EEG response is used by the system to recognize the user's interests and narrow down the search results. The system then adds more descriptive terms to the search query, retrieves more specific image search results, and repeats the process. As a proof of concept, a prototype system was designed and implemented to test navigation through the interface and the Hex-O-Speller using an event-related potential (ERP) detection and classification system. A comparison of different feature extraction methods and classifiers was done to study the detection of event-related potentials on a standard data set, and the results and challenges faced were noted and analyzed. This research elaborates on the implementation of the data collection system for the BCI, discusses how events recorded during the visual stimulus are used for epoching/segmenting the collected data, and describes how the data is stored during BCI training sessions and which visual stimuli are used during training. The preliminary results of the real-time implementation of the prototype BCI system are measured by the number of times the user/subject was successful in navigating through the interface and spelling the search keyword 'FOX' using the mental typewriter Hex-O-Speller. Out of ten tries, the user/subject was successful six times.
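
    As an illustration of the epoching/segmentation and ERP classification steps described above, the sketch below cuts fixed windows around stimulus event markers and trains a linear discriminant classifier on the flattened epochs. The sampling rate, window lengths, baseline correction, synthetic data, and classifier choice are assumptions for illustration only, not the pipeline used in this work.

```python
# Minimal ERP epoching and classification sketch; shapes and parameters are assumed.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate [Hz] (assumed)

def epoch(eeg: np.ndarray, event_samples: np.ndarray,
          tmin: float = -0.2, tmax: float = 0.8) -> np.ndarray:
    """Cut fixed-length windows around each stimulus onset.

    eeg: (n_channels, n_samples) continuous recording
    event_samples: stimulus-onset sample indices (assumed to lie fully inside the recording)
    returns: (n_events, n_channels, n_window) array of baseline-corrected epochs
    """
    pre, post = int(-tmin * FS), int(tmax * FS)
    epochs = []
    for s in event_samples:
        win = eeg[:, s - pre:s + post]
        win = win - win[:, :pre].mean(axis=1, keepdims=True)  # subtract pre-stimulus baseline
        epochs.append(win)
    return np.stack(epochs)

# Synthetic stand-in data: 8 channels, 2 minutes of EEG, 100 stimulus events.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 120 * FS))
events = rng.integers(FS, 119 * FS, size=100)
labels = rng.integers(0, 2, size=100)           # 1 = attended (target) stimulus

X = epoch(eeg, events)                          # (100, 8, 250)
X = X.reshape(len(X), -1)                       # flatten channels x time as features
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

    In an online setting, the same window extraction would run after each RSVP image or Hex-O-Speller flash, and the classifier output would drive the selection of the attended item.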